Local Minimax Complexity of Stochastic Convex Optimization
We extend the traditional worst-case, minimax analysis of stochastic convex optimization by introducing a localized form of minimax complexity for individual functions. Our main result gives function-specific lower and upper bounds on the number of stochastic subgradient evaluations needed to optimize either the function or its "hardest local alternative" to a given numerical precision. The bounds are expressed in terms of a localized, computational analogue of the modulus of continuity that is central to statistical minimax analysis. We show how the computational modulus of continuity can be explicitly calculated in concrete cases, and how it relates to the curvature of the function at the optimum. We also prove a superefficiency result demonstrating that it is a meaningful benchmark, acting as a computational analogue of the Fisher information in statistical estimation. The nature and practical implications of the results are demonstrated in simulations.
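The abstract notes that the local complexity relates to the curvature of the function at the optimum. As an illustrative sketch only (not the paper's actual experiments), the following hypothetical simulation runs stochastic subgradient descent on f_k(x) = |x|^k / k for k = 2 (strong curvature at the optimum) and k = 4 (flat near the optimum), and compares the final distance to the minimizer x* = 0. The step-size constant, noise level, and horizon are arbitrary choices for illustration.

```python
import random

def sgd_final_error(k, T=500, sigma=0.1, step=0.5, seed=0):
    """Run stochastic subgradient descent on f(x) = |x|**k / k.

    The minimizer is x* = 0. The subgradient sign(x) * |x|**(k-1)
    is observed with additive Gaussian noise of standard deviation
    sigma. Returns |x_T|, the final distance to the optimum.
    """
    rng = random.Random(seed)
    x = 1.0
    for t in range(1, T + 1):
        g = (1.0 if x >= 0 else -1.0) * abs(x) ** (k - 1) + rng.gauss(0.0, sigma)
        x -= step / t ** 0.5 * g  # step size c / sqrt(t)
    return abs(x)

# Averaged over repetitions, the strongly curved objective (k = 2)
# is optimized to markedly higher accuracy than the flat one (k = 4),
# consistent with a function-specific notion of difficulty.
err_quadratic = sum(sgd_final_error(2, seed=s) for s in range(50)) / 50
err_quartic = sum(sgd_final_error(4, seed=s) for s in range(50)) / 50
```

With these settings the quadratic runs settle near the noise floor, while the quartic runs stall at a visibly larger distance from the optimum.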
Reviews: Local Minimax Complexity of Stochastic Convex Optimization
While I would certainly agree with the claim that developing local minimax bounds is of interest, I feel that the interest of considering risks for 2-point subproblems is unclear. Note that many (probably a majority of) known bounds in stochastic convex optimization are obtained on 2-point subproblems (see, e.g., …). In other words, the minimax risks (up to an absolute factor) for general classes of convex problems are already attained on the hardest 2-point subproblems (1-parametric families), so that, in the paper's notation, $R_T(\mathcal{F}) \sim \sup_{f \in \mathcal{F}} R_T(f, \mathcal{F})$.

On the other hand, it is unclear to me whether the notion of $R_T(f, \mathcal{F})$ is of interest in its own right, rather than merely as a technical tool for constructing a more general lower bound. For instance, it is not clear why and when the "2-point risk" $R_T(f, \mathcal{F})$ is an adequate measure of local complexity. Note that this is generally not the case in nonparametric estimation (except for a few situations, such as the case of linear functional estimation in [4-6]).
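For concreteness, the local (two-point) quantity the reviewer refers to can be written as below. This is my reading of the paper's notation and should be checked against the paper itself: the inner infimum ranges over estimators $\widehat{x}_T$ produced from $T$ stochastic subgradient evaluations.

```latex
% Local minimax complexity of an individual f in a class \mathcal{F}:
% the risk of the hardest two-point subproblem {f, g} containing f.
R_T(f, \mathcal{F})
  \;=\; \sup_{g \in \mathcal{F}} \;
        \inf_{\widehat{x}_T} \;
        \max_{h \in \{f,\, g\}} \;
        \mathbb{E}_h\!\left[\, h(\widehat{x}_T) - \inf_x h(x) \,\right]
```

The reviewer's relation $R_T(\mathcal{F}) \sim \sup_{f \in \mathcal{F}} R_T(f, \mathcal{F})$ then says that the global minimax risk is attained, up to a constant, on the hardest such two-point family.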
Chatterjee, Sabyasachi; Duchi, John C.; Lafferty, John; Zhu, Yuancheng